03. Beyond REINFORCE

Here, we briefly review the key ingredients of the REINFORCE algorithm.

REINFORCE works as follows: First, we initialize a random policy \pi_\theta(a|s). Using this policy, we collect a trajectory -- that is, a sequence of (state, action, reward) tuples, one per time step:

s_1, a_1, r_1, s_2, a_2, r_2, ...
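
To make the collection step concrete, here is a minimal sketch in PyTorch, assuming a Gymnasium-style environment env with discrete actions. The Policy class and the collect_trajectory helper are illustrative names, not part of any particular library.

```python
import torch
import torch.nn as nn


class Policy(nn.Module):
    """Maps a state vector to a categorical distribution over actions."""
    def __init__(self, state_dim, n_actions, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state):
        return torch.distributions.Categorical(logits=self.net(state))


def collect_trajectory(env, policy):
    """Run one episode, returning per-step log-probabilities and rewards."""
    log_probs, rewards = [], []
    state, _ = env.reset()
    done = False
    while not done:
        dist = policy(torch.as_tensor(state, dtype=torch.float32))
        action = dist.sample()                   # a_t ~ pi_theta(.|s_t)
        log_probs.append(dist.log_prob(action))  # log pi_theta(a_t|s_t)
        state, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated
    return log_probs, rewards
```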

Second, we compute the total reward of the trajectory, R = r_1 + r_2 + r_3 + …, and an estimate of the gradient of the expected reward, g:

g = R \sum_t \nabla_\theta \log\pi_\theta(a_t|s_t)
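
In code, this estimate is usually obtained via automatic differentiation: we build the scalar R \sum_t \log\pi_\theta(a_t|s_t) and let backpropagation compute its gradient with respect to \theta, which is exactly g. A sketch, assuming log_probs and rewards come from the rollout helper above:

```python
log_probs, rewards = collect_trajectory(env, policy)

R = sum(rewards)                              # total reward of the trajectory
surrogate = R * torch.stack(log_probs).sum()  # R * sum_t log pi_theta(a_t|s_t)

policy.zero_grad()
surrogate.backward()   # each parameter's .grad now holds its slice of g
```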

Third, we update our policy using gradient ascent with learning rate \alpha:

\theta \leftarrow \theta + \alpha g
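
With the gradients from the backward pass above, the ascent step can be written directly; alpha here is just a hand-picked learning rate. Equivalently, one could minimize the negative surrogate with torch.optim.SGD, which performs the same update.

```python
alpha = 1e-2
with torch.no_grad():
    for param in policy.parameters():
        param += alpha * param.grad   # theta <- theta + alpha * g
```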

The process then repeats.
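
Putting the pieces together, the outer loop simply repeats collect, estimate, and update, one trajectory at a time. The helper names, alpha, and num_episodes are the illustrative quantities from the sketches above.

```python
for episode in range(num_episodes):
    log_probs, rewards = collect_trajectory(env, policy)    # 1. roll out pi_theta
    surrogate = sum(rewards) * torch.stack(log_probs).sum()
    policy.zero_grad()
    surrogate.backward()                                     # 2. estimate g
    with torch.no_grad():
        for param in policy.parameters():
            param += alpha * param.grad                      # 3. theta <- theta + alpha*g
```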

What are the main problems with REINFORCE? There are three issues:

  1. The update process is very inefficient! We run the policy once, update once, and then throw away the trajectory.

  2. The gradient estimate g is very noisy. By chance, the collected trajectory may not be representative of the policy.

  3. There is no clear credit assignment. A trajectory may contain many good and bad actions, and whether these actions are reinforced depends only on the final total reward.

In the following concepts, we will go over ways to improve the REINFORCE algorithm and resolve all three issues. All of these improvements are incorporated into the PPO algorithm.